Hope and the city: a case study of the resiliency adaptations of British boys of African or Caribbean cultural heritage attending Year 7 at an urban secondary school.
Black-British young people are, on average, at least four times more likely to be excluded from school and experience significantly lower levels of academic attainment than their demographically matched white counterparts. This research adopts a social constructionist understanding of resilience to explore how ten Black-British students in an urban secondary school cope within their school and community. It is hoped that the case study of their resiliency adaptations will inform primary and secondary prevention.
The interview transcripts were analysed using Grounded Theory methods (Charmaz, 2006), involving the continuous analysis and comparison of data. This process produced 81 focused codes and 19 memos, which were conceptualised into three categories that formed "Hope Theory." This theory suggests that having educational and vocational aspirations is important in shaping how all young people, not just those of Black-British cultural heritage, engage in school, and in moderating the effects of communities that are perceived as unstable and threatening. Key to hoped-for goals is the ability to identify viable pathways towards their completion and a sufficient sense of personal agency or self-efficacy to attempt them. Comparisons were drawn between Hope Theory and the extant literature, highlighting the Working Alliance as a tool that could help EPs and teachers build social and physical ecologies that support hope and resilience in young people.
How the Internet of Things (IoT) is Adding Proactivity to Insurance
In recent years, the insurance industry has seen a major shift in how data is used, specifically in the realm of risk prevention. Advancements in technology within the Internet of Things (IoT) have enabled more comprehensive data analysis, changing the way the industry views risk. This has led to an increased emphasis on solutions that are proactive, preventing risk as opposed to merely mitigating losses. Despite the industry being historically slow-moving and focused on response to risk, new offerings are now promoting prediction and prevention of risk. This report will explore the implementation of the Internet of Things into the insurance industry. First, the concepts of IoT and proactivity will be described. The state of the insurance industry will then be examined, followed by the culture of innovation within insurance companies and how it holds a significant role in driving the industry forward. A select overview of insurtech solutions that contribute to the theme of proactivity within IoT will be detailed. To follow, looming adoption issues will be addressed. Finally, the report will outline up-and-coming strategies in the industry, including the rising trends of integration and gamification.
Spitzer Space Telescope Measurements of Dust Reverberation Lags in the Seyfert 1 Galaxy NGC 6418
We present results from a fifteen-month campaign of high-cadence (~3 days) mid-infrared Spitzer and optical (B and V) monitoring of the Seyfert 1 galaxy NGC 6418, with the objective of determining the characteristic size of the dusty torus in this active galactic nucleus (AGN). We find that the 3.6 μm and 4.5 μm flux variations lag behind those of the optical continuum by […] days and […] days, respectively. We report a cross-correlation time lag between the 4.5 μm and 3.6 μm flux of […] days. The lags indicate that the dust emitting at 3.6 μm and 4.5 μm is located at a distance of approximately 1 light-month (~0.03 pc) from the source of the AGN UV-optical continuum. The reverberation radii are consistent with the inferred lower limit to the sublimation radius for pure graphite grains at 1800 K, but smaller by a factor of ~2 than the corresponding lower limit for silicate grains; this is similar to what has been found for near-infrared (K-band) lags in other AGN. The 3.6 and 4.5 μm reverberation radii fall above the K-band size-luminosity relationship by factors of […] and […], respectively, while the 4.5 μm reverberation radius is only 27% larger than the 3.6 μm radius. This is broadly consistent with clumpy torus models, in which individual optically thick clouds emit strongly over a broad wavelength range.
Comment: 13 pages, 9 figures
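The lag measurements above come from cross-correlating an infrared light curve against the optical one. As a minimal illustration (not the authors' actual pipeline, which likely uses an interpolated cross-correlation function with uncertainty estimation), the sketch below estimates a lag by maximizing the Pearson correlation over a grid of trial lags; the 30-day delay and light-curve shape are synthetic assumptions:

```python
import numpy as np

def ccf_lag(t, optical, infrared, lags):
    """Estimate how far `infrared` lags `optical`: shift the IR curve by
    each trial lag (days), interpolate onto the optical time grid, and
    pick the lag with the highest Pearson correlation."""
    r = []
    for lag in lags:
        shifted = np.interp(t, t - lag, infrared)  # IR evaluated at t + lag
        r.append(np.corrcoef(optical, shifted)[0, 1])
    r = np.array(r)
    return lags[int(np.argmax(r))], r

# Synthetic example: ~3-day cadence over 15 months, IR echoes optical
# with a 30-day delay (all numbers are illustrative assumptions).
rng = np.random.default_rng(0)
t = np.arange(0, 450, 3.0)
optical = np.sin(2 * np.pi * t / 120) + 0.05 * rng.standard_normal(t.size)
infrared = np.interp(t, t + 30, optical)  # delayed copy of the optical curve

lags = np.arange(0, 90, 1.0)
best, r = ccf_lag(t, optical, infrared, lags)
print(best)  # recovers a lag near 30 days
```

Real AGN monitoring data are irregularly sampled and noisy, so published analyses typically quote lag uncertainties from Monte Carlo resampling rather than a single correlation peak.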
Genomic Profiling of Childhood Tumor Patient-Derived Xenograft Models to Enable Rational Clinical Trial Design.
Accelerating cures for children with cancer remains an immediate challenge as a result of extensive oncogenic heterogeneity between and within histologies, distinct molecular mechanisms evolving between diagnosis and relapsed disease, and limited therapeutic options. To systematically prioritize and rationally test novel agents in preclinical murine models, researchers within the Pediatric Preclinical Testing Consortium are continuously developing patient-derived xenografts (PDXs), many of which are refractory to current standard-of-care treatments, from high-risk childhood cancers. Here, we genomically characterize 261 PDX models from 37 unique pediatric cancers; demonstrate faithful recapitulation of histologies and subtypes; and refine our understanding of relapsed disease. In addition, we use expression signatures to classify tumors for TP53 and NF1 pathway inactivation. We anticipate that these data will serve as a resource for pediatric oncology drug development and will guide rational clinical trial design for children with cancer.
Spitzer Space Telescope Measurements of Dust Reverberation Lags in the Seyfert 1 Galaxy NGC 6418
We present results from a 15-month campaign of high-cadence (~3 days) mid-infrared Spitzer and optical (B and V) monitoring of the Seyfert 1 galaxy NGC 6418, with the objective of determining the characteristic size of the dusty torus in this active galactic nucleus (AGN). [...]
For the remainder of the abstract, please visit:
http://dx.doi.org/10.1088/0004-637X/801/2/12
A simulation study for comparing testing statistics in response-adaptive randomization
Background: Response-adaptive randomizations are able to assign more patients in a comparative clinical trial to the tentatively better treatment. However, due to the adaptation in patient allocation, the samples to be compared are no longer independent. At large sample sizes, many asymptotic properties of test statistics derived for independent-sample comparison still apply in adaptive randomization, provided that the patient allocation ratio converges to an appropriate target asymptotically. However, the small-sample properties of commonly used test statistics in response-adaptive randomization are not fully studied.
Methods: Simulations are systematically conducted to characterize the statistical properties of eight test statistics in six response-adaptive randomization methods at six allocation targets, with sample sizes ranging from 20 to 200. Since adaptive randomization is usually not recommended for sample sizes less than 30, the present paper focuses on the case with a sample of 30 to give general recommendations with regard to test statistics for contingency tables in response-adaptive randomization at small sample sizes.
Results: Among all asymptotic test statistics, Cook's correction to the chi-square test (T_MC) is the best at attaining the nominal size of the hypothesis test. Williams' correction to the log-likelihood-ratio test (T_ML) gives a slightly inflated type I error and higher power compared with T_MC, but it is more robust against imbalance in patient allocation. T_MC and T_ML are usually the two test statistics with the highest power across different simulation scenarios.
When focusing on T_MC and T_ML, the generalized drop-the-loser urn (GDL) and the sequential estimation-adjusted urn (SEU), respectively, have the best ability to attain the correct size of the hypothesis test. Among all sequential methods that can target different allocation ratios, GDL has the lowest variation and the highest overall power at all allocation ratios. The performance of different adaptive randomization methods and test statistics also depends on the allocation target. At the limiting allocation ratio of the drop-the-loser (DL) and randomized play-the-winner (RPW) urns, DL outperforms all other methods, including GDL. When comparing the power of test statistics within the same randomization method but at different allocation targets, the powers of the log-likelihood-ratio, log-relative-risk, log-odds-ratio, Wald-type Z, and chi-square test statistics are maximized at their corresponding optimal allocation ratios for power. Except for the optimal allocation target for log-relative-risk, the other four optimal targets could assign more patients to the worse arm in some simulation scenarios. Another optimal allocation target, R_RSIHR, proposed by Rosenberger and Sriram (Journal of Statistical Planning and Inference, 1997), is aimed at minimizing the number of failures at fixed power using the Wald-type Z test statistic. Among allocation ratios that always assign more patients to the better treatment, R_RSIHR usually has less variation in patient allocation, and the values of variation are consistent across all simulation scenarios. Additionally, the patient allocation at R_RSIHR is not too extreme. Therefore, R_RSIHR provides a good balance between assigning more patients to the better treatment and maintaining the overall power.
Conclusion: Cook's correction to the chi-square test and Williams' correction to the log-likelihood-ratio test are generally recommended for hypothesis testing in response-adaptive randomization, especially when sample sizes are small. The generalized drop-the-loser urn design is recommended for its good overall properties, as is the use of the R_RSIHR allocation target.
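The response-adaptive designs compared in this abstract can be illustrated with the simplest of them, the randomized play-the-winner (RPW) urn. The toy simulation below uses hypothetical success probabilities and trial sizes, not the paper's actual simulation settings, and only demonstrates how the urn skews allocation toward the better-performing arm:

```python
import random

def rpw_trial(n, p_success, seed=None):
    """Randomized play-the-winner urn: start with one ball per arm.
    A success on an arm adds a ball for that arm; a failure adds a
    ball for the opposite arm. Returns patients assigned per arm."""
    rng = random.Random(seed)
    urn = {0: 1, 1: 1}       # initial urn composition
    assigned = {0: 0, 1: 0}
    for _ in range(n):
        arm = rng.choices([0, 1], weights=[urn[0], urn[1]])[0]
        assigned[arm] += 1
        success = rng.random() < p_success[arm]
        # reinforce whichever arm this patient's outcome favored
        urn[arm if success else 1 - arm] += 1
    return assigned

# Average allocation over many simulated 30-patient trials,
# with arm 1 clearly better (p = 0.7 vs 0.3; assumed values).
trials = [rpw_trial(30, (0.3, 0.7), seed=s) for s in range(2000)]
frac_to_better = sum(t[1] for t in trials) / (30 * len(trials))
print(round(frac_to_better, 2))  # more than half the patients go to arm 1
```

At n = 30 the allocation is noticeably less extreme than the urn's limiting ratio, which is exactly the small-sample regime the paper's test-statistic comparison addresses.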
Treatment Combinations for Alzheimer's Disease: Current and Future Pharmacotherapy Options
Although Alzheimer's disease (AD) is the world's leading cause of dementia and the population of patients with AD continues to grow, no new therapies have been approved in more than a decade. Many clinical trials of single-agent therapies have failed to affect disease progression or symptoms compared with placebo. The complex pathophysiology of AD may necessitate combination treatments rather than monotherapy. The goal of this narrative literature review is to describe types of combination therapy, review the current clinical evidence for combination therapy regimens (both symptomatic and disease-modifying) in the treatment of AD, describe innovative clinical trial study designs that may be effective in testing combination therapy, and discuss the regulatory and drug development landscape for combination therapy. Successful combination therapies in other complex disorders, such as human immunodeficiency virus, may provide useful examples of a potential path forward for AD treatment.
The authors received medical writing assistance for development of this manuscript from Maria Hovenden, PhD, and Lauren Stutzbach, PhD, at Complete Publication Solutions, LLC (North Wales, PA; a CHC Group company), which was funded by Lundbeck. JLC acknowledges support from a COBRE NIH/NIGMS grant (P20GM109025) and Keep Memory Alive.
The Alpine Fault Hangingwall Viewed From Within: Structural Analysis of Ultrasonic Image Logs in the DFDP-2B Borehole, New Zealand
Ultrasonic image logs acquired in the DFDP-2B borehole yield the first continuous, subsurface description of the transition from schist to mylonite in the hangingwall of the Alpine Fault, New Zealand, to a depth of 818 m below surface. Three feature sets are delineated. One set, comprising foliation and foliation-parallel veins and fractures, has a constant orientation. The average dip direction of 145° is subparallel to the dip direction of the Alpine Fault, and the average dip magnitude of 60° is similar to nearby outcrop observations of foliation in the Alpine mylonites that occur immediately above the Alpine Fault. We suggest that this foliation orientation is similar to the Alpine Fault plane at ~1 km depth in the Whataroa valley. The other two auxiliary feature sets are interpreted as joints based on their morphology and orientation. Subvertical joints with NW-SE (137°) strike, occurring dominantly above ~500 m, are interpreted as having formed during the exhumation and unloading of the Alpine Fault's hangingwall. Gently dipping joints, predominantly observed below ~500 m, are interpreted as inherited hydrofractures exhumed from their depth of formation. These three fracture sets, combined with subsidiary brecciated fault zones, define the fluid pathways and anisotropic permeability directions. In addition, high topographic relief, which perturbs the stress tensor, likely enhances the slip potential, and thus the permeability, of subvertical fractures below the ridges and of gently dipping fractures below the valleys. Thus, DFDP-2B borehole observations support the inference of a large zone of enhanced permeability in the hangingwall of the Alpine Fault.
Petrophysical, Geochemical, and Hydrological Evidence for Extensive Fracture-Mediated Fluid and Heat Transport in the Alpine Fault's Hanging-Wall Damage Zone
Fault rock assemblages reflect interaction between deformation, stress, temperature, fluid, and chemical regimes on distinct spatial and temporal scales at various positions in the crust. Here we interpret measurements made in the hanging wall of the Alpine Fault during the second stage of the Deep Fault Drilling Project (DFDP-2). We present observational evidence for extensive fracturing and high hanging-wall hydraulic conductivity (~10⁻⁹ to 10⁻⁷ m/s, corresponding to permeability of ~10⁻¹⁶ to 10⁻¹⁴ m²) extending several hundred meters from the fault's principal slip zone. Mud losses, gas chemistry anomalies, and petrophysical data indicate that a subset of fractures intersected by the borehole are capable of transmitting fluid volumes of several cubic meters on time scales of hours. DFDP-2 observations and other data suggest that this hydrogeologically active portion of the fault zone in the hanging wall is several kilometers wide in the uppermost crust. This finding is consistent with numerical models of earthquake rupture and off-fault damage. We conclude that the mechanically and hydrogeologically active part of the Alpine Fault is a more dynamic and extensive feature than commonly described in models based on exhumed faults. We propose that the hydrogeologically active damage zone of the Alpine Fault and other large active faults in areas of high topographic relief can be subdivided into an inner zone in which damage is controlled principally by earthquake rupture processes and an outer zone in which damage reflects coseismic shaking, strain accumulation and release on interseismic timescales, and inherited fracturing related to exhumation.
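The paired conductivity and permeability ranges quoted in this abstract are related by the standard conversion k = Kμ/(ρg). A quick check, assuming the fluid properties of water near 20 °C (an assumption; the in situ fluid is warmer and slightly different):

```python
# Convert hydraulic conductivity K [m/s] to intrinsic permeability k [m^2]
# via k = K * mu / (rho * g). Fluid properties are assumed values for
# water near 20 degC, not measured borehole-fluid properties.
MU = 1.0e-3    # dynamic viscosity, Pa*s
RHO = 1.0e3    # density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def conductivity_to_permeability(K):
    return K * MU / (RHO * G)

for K in (1e-9, 1e-7):  # the hanging-wall range reported above
    print(f"K = {K:.0e} m/s -> k = {conductivity_to_permeability(K):.1e} m^2")
```

The result, roughly 10⁻¹⁶ m² for 10⁻⁹ m/s and 10⁻¹⁴ m² for 10⁻⁷ m/s, matches the pairing in the abstract; the rule of thumb is k ≈ K × 10⁻⁷ for water at ambient conditions.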
Surface rupture of multiple crustal faults in the 2016 Mw 7.8 Kaikōura, New Zealand, earthquake
Multiple (>20) crustal faults ruptured to the ground surface and seafloor in the 14 November 2016 Mw 7.8 Kaikōura earthquake, and many have been documented in detail, providing an opportunity to understand the factors controlling multifault ruptures, including the role of the subduction interface. We present a summary of the surface ruptures, as well as previous knowledge including paleoseismic data, and use these data and a 3D geological model to calculate cumulative geological moment magnitudes (Mw^G) and seismic moments for comparison with those from geophysical datasets. The earthquake ruptured faults with a wide range of orientations, senses of movement, slip rates, and recurrence intervals, and crossed a tectonic domain boundary, the Hope fault. The maximum net surface displacement was ~12 m on the Kekerengu and Papatea faults, and average displacements for the major faults were 0.7–1.5 m south of the Hope fault and 5.5–6.4 m to the north. Mw^G values calculated using two different methods are Mw^G 7.7 (+0.3/−0.2), and the seismic moment is 33%–67% of that from geophysical datasets. However, these are minimum values, and a best estimate of Mw^G incorporating probable larger slip at depth, a 20 km seismogenic depth, and likely listric geometry is Mw^G 7.8 ± 0.2, suggesting ≤32% of the moment may be attributed to slip on the subduction interface and/or a midcrustal detachment. Likely factors contributing to multifault rupture in the Kaikōura earthquake include (1) the presence of the subduction interface, (2) physical linkages between faults, (3) rupture of geologically immature faults in the south, and (4) inherited geological structure. The estimated recurrence interval for the Kaikōura earthquake is ≥5,000–10,000 yrs, so it is a relatively rare event. Nevertheless, these findings support the need for continued advances in seismic hazard modeling to ensure that they incorporate multifault ruptures that cross tectonic domain boundaries.
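Geological moment magnitudes of the kind computed in this study follow from the seismic moment M0 = μAD (rigidity × rupture area × average slip) and the Hanks–Kanamori relation Mw = (2/3)(log₁₀ M0 − 9.05) for M0 in N·m. The sketch below uses purely illustrative numbers, not the paper's 3D fault model:

```python
import math

def moment_magnitude(mu, area_m2, slip_m):
    """Mw from seismic moment M0 = mu * A * D (M0 in N*m),
    using the Hanks-Kanamori relation."""
    m0 = mu * area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.05)

# Illustrative values only: crustal rigidity 30 GPa, a single
# 150 km x 20 km rupture plane, 6 m average slip.
mw = moment_magnitude(3.0e10, 150e3 * 20e3, 6.0)
print(round(mw, 1))  # -> 7.8
```

A multifault estimate like the paper's Mw^G sums M0 contributions over every ruptured fault segment before converting to magnitude, which is why deeper slip and listric geometry (larger A and D) raise the estimate from 7.7 to 7.8.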